40 research outputs found

    Task search in a human computation market

    In order to understand how a labor market for human computation functions, it is important to know how workers search for tasks. This paper uses two complementary methods to gain insight into how workers search for tasks on Mechanical Turk. First, we perform a high-frequency scrape of 36 pages of search results and analyze the rate at which tasks disappear under each of the key sort orders Mechanical Turk offers workers. Second, we present the results of a survey in which we paid workers for self-reported information about how they search for tasks. Our main findings are that, in the aggregate, workers sort by which tasks were most recently posted and which have the largest number of instances available. Furthermore, we find that workers look mostly at the first page of the most recently posted tasks and the first two pages of the tasks with the most available instances, but in both categories the position on the results page is unimportant to workers. We observe that at least some employers try to manipulate the position of their task in the search results to exploit the tendency to search for recently posted tasks. At the individual level, we observed workers searching by almost all the possible categories and looking more than 10 pages deep. For a task we posted to Mechanical Turk, we confirmed that a favorable position in the search results does matter: our task was completed 30 times faster, and for less money, when its position was favorable than when it was unfavorable.
    Funding: National Science Foundation (U.S.), Integrative Graduate Education and Research Traineeship (Multidisciplinary Program in Inequality & Social Policy), Grant Number 033340
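    The scrape-based analysis lends itself to a small illustration. Below is a minimal sketch, in Python, of how task disappearance rates could be estimated from repeated scrapes of the search results, one rate per sort order. The data layout (timestamped tuples of sort key and visible task-id sets) is an assumption for illustration, not the authors' actual pipeline.

```python
from collections import defaultdict

def disappearance_rates(scrapes):
    """scrapes: time-ordered list of (timestamp, sort_key, hit_ids) tuples,
    where hit_ids is the set of task ids visible under that sort order.

    Returns, per sort order, the mean fraction of tasks visible in one
    scrape that have disappeared by the next scrape of the same order.
    """
    last_seen = {}                  # sort_key -> hit ids from prior scrape
    fractions = defaultdict(list)   # sort_key -> per-interval fractions gone
    for _, sort_key, hit_ids in scrapes:
        prev = last_seen.get(sort_key)
        if prev:
            fractions[sort_key].append(len(prev - hit_ids) / len(prev))
        last_seen[sort_key] = hit_ids
    return {k: sum(v) / len(v) for k, v in fractions.items() if v}
```

    A sort order whose tasks vanish quickly between scrapes (e.g., "most recently posted") is plausibly one workers watch closely, which is the intuition behind comparing disappearance rates across sort orders.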

    Ubiquitous text interaction

    Computer-based interactions increasingly pervade our everyday environments. Be it on a mobile device, a wearable device, a wall-sized display, or an augmented reality device, interactive systems often rely on the consumption, composition, and manipulation of text. The focus of this workshop is on exploring the problems and opportunities of text interactions that are embedded in our environments, available all the time, and used by people who may be constrained by device, situation, or disability. This workshop welcomes all researchers interested in interactive systems that rely on text input or output. Participants should submit a short position statement outlining their background, past work, and future plans, and suggesting a use case they would like to explore in depth during the workshop. During the workshop, small teams will form around common or compelling use cases. Teams will spend time brainstorming, creating low-fidelity prototypes, and discussing their use case with the group. Participants may optionally submit a technical paper for presentation as part of the workshop program. The workshop serves to sustain and build the community of text entry researchers who attend CHI, and it provides an opportunity for new members to join this community and solicit feedback from experts in a small and supportive environment.

    Braille text entry on smartwatches : an evaluation of methods for composing the Braille cell

    Smartwatches are gaining popularity on the market, offering a set of features comparable to smartphones in a wearable device. This novel technology brings new interaction paradigms and challenges for blind users, who have difficulties dealing with touchscreens. Among the variety of tasks that must be studied, we analyze text entry, considering that currently existing solutions may be unsatisfactory (such as voice input) or even unfeasible (such as working with tiny QWERTY keyboards) for a blind user. More specifically, this paper presents a study on possible solutions for composing a Braille cell on smartwatches. Five prototypes were developed and different feedback features were proposed. These were evaluated with seven specialists in a study that yields a qualitative analysis of which strategies can be most useful to blind users for Braille text entry.
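    As context for the prototypes, the underlying data structure is small: a Braille cell is six dots in a 2x3 grid, and a character is the subset of raised dots. Below is a minimal sketch of turning a composed cell into a character via the Unicode Braille block, where dots 1-6 map to bits 0-5 above U+2800; how taps on the watch face map to dot numbers is exactly what varies across the prototypes, so that mapping is left out here.

```python
BRAILLE_BASE = 0x2800  # Unicode "BRAILLE PATTERN BLANK"

def cell_to_char(raised_dots):
    """raised_dots: iterable of dot numbers 1..6 selected by the user.

    Dot n sets bit n-1 of the offset into the Unicode Braille block.
    """
    mask = 0
    for dot in raised_dots:
        mask |= 1 << (dot - 1)
    return chr(BRAILLE_BASE + mask)

# Example: dots 1, 3, and 4 form the letter 'm' in Grade 1 English Braille.
assert cell_to_char([1, 3, 4]) == "\u280d"
```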

    Improving public transit accessibility for blind riders by crowdsourcing bus stop landmark locations with Google street view: An extended analysis

    Low-vision and blind bus riders often rely on known physical landmarks to help locate and verify bus stop locations (e.g., by searching for an expected shelter, bench, or newspaper bin). However, there are currently few, if any, methods to determine this information a priori via computational tools or services. In this article, we introduce and evaluate a new scalable method for collecting bus stop location and landmark descriptions by combining online crowdsourcing and Google Street View (GSV). We conduct and report on three studies: (i) a formative interview study of 18 people with visual impairments to inform the design of our crowdsourcing tool, (ii) a comparative study examining differences between physical bus stop audit data and audits conducted virtually with GSV, and (iii) an online study of 153 crowd workers on Amazon Mechanical Turk to examine the feasibility of crowdsourcing bus stop audits using our custom tool with GSV. Our findings reemphasize the importance of landmarks in nonvisual navigation, demonstrate that GSV is a viable bus stop audit dataset, and show that minimally trained crowd workers can find and identify bus stop landmarks with 82.5% accuracy across 150 bus stop locations (87.3% with simple quality control).
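    The "simple quality control" figure suggests the usual redundancy-plus-aggregation pattern for crowd work. Below is a hedged sketch of one such scheme, majority voting over independent worker labels per bus stop; the label format and agreement threshold are illustrative assumptions, not the paper's exact procedure.

```python
from collections import Counter

def majority_label(labels, min_agreement=0.5):
    """labels: landmark labels from independent workers for one bus stop.

    Returns the most common label if its share of votes exceeds the
    agreement threshold; otherwise None, flagging the stop for review.
    """
    if not labels:
        return None
    label, count = Counter(labels).most_common(1)[0]
    return label if count / len(labels) > min_agreement else None

# Example: three of four workers agree, so the label is accepted.
assert majority_label(["shelter", "shelter", "bench", "shelter"]) == "shelter"
```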

    Improving public transit accessibility for blind riders by crowdsourcing bus stop landmark locations with Google street view

    Low-vision and blind bus riders often rely on known physical landmarks to help locate and verify bus stop locations (e.g., by searching for a shelter, bench, or newspaper bin). However, there are currently few, if any, methods to determine this information a priori via computational tools or services. In this paper, we introduce and evaluate a new scalable method for collecting bus stop location and landmark descriptions by combining online crowdsourcing and Google Street View (GSV). We conduct and report on three studies in particular: (i) a formative interview study of 18 people with visual impairments to inform the design of our crowdsourcing tool; (ii) a comparative study examining differences between physical bus stop audit data and audits conducted virtually with GSV; and (iii) an online study of 153 crowd workers on Amazon Mechanical Turk to examine the feasibility of crowdsourcing bus stop audits using our custom tool with GSV. Our findings reemphasize the importance of landmarks in non-visual navigation, demonstrate that GSV is a viable bus stop audit dataset, and show that minimally trained crowd workers can find and identify bus stop landmarks with 82.5% accuracy across 150 bus stop locations (87.3% with simple quality control).